

Search for: All records

Creators/Authors contains: "Engdahl, Nicholas"

Note: Clicking a Digital Object Identifier (DOI) takes you to an external site maintained by the publisher. Some full-text articles may not be freely available during the publisher's embargo period.


  1. Free, publicly-accessible full text available February 18, 2026
  2. Modern hydrologic models have extraordinary capabilities for representing complex processes in surface-subsurface systems. These capabilities have revolutionized the way we conceptualize flow systems, but how to represent uncertainty in simulated flow systems is not as well developed. Currently, characterizing model uncertainty can be computationally expensive, in part because the techniques are appended to the numerical methods rather than seamlessly integrated. The next generation of computers, however, presents opportunities to reformulate the modeling problem so that the uncertainty components are handled more directly within the flow system simulation. Misconceptions about quantum computing abound, and it will not be a “silver bullet” for solving all complex problems, but it might be leveraged for certain kinds of highly uncertain problems, such as groundwater (GW). The point of this issue paper is that the GW community could try to revise the foundations of our models so that the governing equations being solved are tailored specifically for quantum computers. The goal moving forward should not just be to accelerate the models we have, but also to address their deficiencies. Embedding uncertainty into the models by evolving distribution functions will make predictive GW modeling more complicated, but doing so places the problem into a complexity class that is highly efficient on quantum computing hardware. Next generation GW models could put uncertainty into the problem at the very beginning of a simulation and leave it there throughout, providing a completely new way of simulating subsurface flows.
  3. Abstract. Lagrangian particle tracking schemes allow a wide range of flow and transport processes to be simulated accurately, but a major challenge is numerically implementing the inter-particle interactions in an efficient manner. This article develops a multi-dimensional, parallelized domain decomposition (DDC) strategy for mass-transfer particle tracking (MTPT) methods in which particles exchange mass dynamically. We show that this can be efficiently parallelized by employing large numbers of CPU cores to accelerate run times. In order to validate the approach and our theoretical predictions, we focus our efforts on a well-known benchmark problem with pure diffusion, where analytical solutions in any number of dimensions are well established. In this work, we investigate different procedures for “tiling” the domain in two and three dimensions (2-D and 3-D), as this type of formal DDC construction is currently limited to 1-D. An optimal tiling is prescribed based on physical problem parameters and the number of available CPU cores, as each tiling provides distinct results in both accuracy and run time. We further extend the most efficient technique to 3-D for comparison, leading to an analytical discussion of the effect of dimensionality on strategies for implementing DDC schemes. Increasing computational resources (cores) within the DDC method produces a trade-off between inter-node communication and on-node work. For an optimally subdivided diffusion problem, the 2-D parallelized algorithm achieves nearly perfect linear speedup in comparison with the serial run, up to around 2700 cores, reducing a 5 h simulation to 8 s, while the 3-D algorithm maintains appreciable speedup up to 1700 cores.
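The abstract above notes that the choice of 2-D tiling governs the trade-off between inter-node communication and on-node work. The following is a minimal sketch, not the authors' actual algorithm, of one way such a tiling could be chosen: among the factorizations of the core count, pick the grid of tiles that minimizes total shared edge length, a simple proxy for the volume of ghost-particle communication across tile boundaries. The function name and the perimeter heuristic are illustrative assumptions.

```python
def choose_tiling(ncores, lx, ly):
    """Pick a 2-D tiling (nx, ny) of ncores tiles for an lx-by-ly domain.

    Illustrative heuristic (not from the paper): among factor pairs with
    nx * ny == ncores, minimize the total length of shared tile edges,
    a proxy for inter-tile communication in a DDC particle-tracking code.
    """
    best = None
    for nx in range(1, ncores + 1):
        if ncores % nx:
            continue
        ny = ncores // nx
        # nx - 1 vertical cuts of length ly, ny - 1 horizontal cuts of length lx
        shared_edge = (nx - 1) * ly + (ny - 1) * lx
        if best is None or shared_edge < best[0]:
            best = (shared_edge, nx, ny)
    return best[1], best[2]


# For a square domain and 4 cores, a 2 x 2 tiling minimizes shared edges.
print(choose_tiling(4, 1.0, 1.0))  # → (2, 2)
```

For elongated domains the heuristic naturally places more cuts along the long axis, which is the qualitative behavior an optimal DDC tiling should exhibit; the paper's actual prescription also accounts for accuracy, which this sketch ignores.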